Running head: From coincidences to discoveries

From mere coincidences to meaningful discoveries
Abstract
People’s reactions to coincidences are often cited as an illustration of the irrationality of human reasoning about chance. We argue that coincidences may be better understood in terms of rational statistical inference, based on their functional role in processes of causal discovery and theory revision. We present a formal definition of coincidences in the context of a Bayesian framework for causal induction: a coincidence is an event that provides support for an alternative to a currently favored causal theory, but not necessarily enough support to accept that alternative in light of its low prior probability. We test the qualitative and quantitative predictions of this account through a series of experiments that examine the transition from coincidence to evidence, the correspondence between the strength of coincidences and the statistical support for causal structure, and the relationship between causes and coincidences. Our results indicate that people can accurately assess the strength of coincidences, suggesting that irrational conclusions drawn from coincidences are the consequence of overestimation of the plausibility of novel causal forces. We discuss the implications of our account for understanding the role of coincidences in theory change.

From mere coincidences to meaningful discoveries

In the last days of August in 1854, the city of London was hit by an unusually violent outbreak of cholera. More than 500 people died over the next fortnight, most of them in a small region in Soho. On September 3, this epidemic caught the attention of John Snow, a physician who had recently begun to argue against the widespread notion that cholera was transmitted by bad air. Snow immediately suspected a water pump on Broad Street as the cause, but could find little evidence of contamination. However, on collecting information about the locations of the cholera victims, he discovered that they were tightly clustered around the pump. This suspicious coincidence hardened his convictions, and the pump handle was removed. The disease did not spread any further, furthering Snow’s (1855) argument that cholera was caused by infected water.

Observing clusters of events in the streets of London does not always result in important discoveries. Towards the end of World War II, London came under bombardment by German V-1 and V-2 flying bombs. It was a widespread popular belief that these bombs were landing in clusters, with an unusual number of bombs landing on the poorer parts of the city (Johnson, 1981). After the war, R. D. Clarke of the Prudential Assurance Company set out to ‘apply a statistical test to discover whether any support could be found for this allegation’ (Clarke, 1946, p. 481). Clarke examined 144 square miles of south London, in which 537 bombs had fallen. He divided this area into small squares and counted the number of bombs falling in each square. If the bombs fell uniformly over this area, then these counts should conform to the Poisson distribution. Clarke found that this was indeed the case, and concluded that his result ‘lends no support to the clustering hypothesis’ (1946, p. 481), implying that people had been misled by their intuitions.1

Taken together, the suspicious coincidence noticed by John Snow and the mere coincidence that fooled the citizens of London present what seems to be a paradox for theories of human reasoning. How can coincidences simultaneously be the source of both important scientific discoveries and widespread false beliefs?
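The procedure Clarke applied can be sketched in a few lines. The snippet below is a minimal illustration on simulated, uniformly scattered coordinates rather than the historical records; the 24 x 24 grid, the lumping of counts of four or more, and the chi-square comparison are assumptions made for the sketch. With genuinely uniform data the observed counts should match the Poisson expectation, which is what Clarke found for the bombs.

```python
import numpy as np
from scipy import stats

# Simulated stand-in for Clarke's records: 537 points scattered uniformly
# over a region cut into a 24 x 24 grid of 576 squares (grid size assumed
# for this sketch, not taken from Clarke's report).
rng = np.random.default_rng(0)
n_bombs, grid = 537, 24
xy = rng.uniform(0, grid, size=(n_bombs, 2))

# Count hits per square, then tabulate how many squares received
# 0, 1, 2, 3, or "4 or more" hits.
per_square, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=grid,
                                  range=[[0, grid], [0, grid]])
hits = np.minimum(per_square.ravel().astype(int), 4)     # lump the upper tail
observed = np.bincount(hits, minlength=5)

# Expected counts if bombs fall uniformly: Poisson with rate = bombs per square.
rate = n_bombs / grid**2
pmf = stats.poisson.pmf(np.arange(5), rate)
pmf[4] = 1 - pmf[:4].sum()                               # P(4 or more hits)
expected = pmf * grid**2

chi2 = ((observed - expected) ** 2 / expected).sum()
p = stats.chi2.sf(chi2, df=len(observed) - 2)            # 5 cells - 1 - 1 fitted rate
print(f"chi-square = {chi2:.2f}, p = {p:.2f}")           # large p: no evidence of clustering
```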
Previous research has tended to focus on only one of these two faces of coincidences. Inspired by examples similar to that of Snow,2 one approach has focused on conceptual analyses or quantitative measures of coincidences that explicate their role in rational inference (Horwich, 1982; Schlesinger, 1991), causal discovery (Owens, 1992) and scientific argument (Hacking, 1983). An alternative approach, inspired by examples like the bombing of London,3 has analyzed the sense of coincidence as a prime example of shortcomings in human understanding of chance and statistical inference (Diaconis & Mosteller, 1989; Fisher, 1937; Gilovich, 1993; Plous, 1993). Neither of these two traditions has attempted to explain how the same cognitive phenomenon can simultaneously be the force driving human reasoning to both its greatest heights, in scientific discovery, and its lowest depths, in superstition and other abiding irrationalities.

In this paper, we develop a framework for understanding coincidences as a functional element of causal discovery. Scientific knowledge is expanded and revised through the discovery of causal relationships that enrich or invalidate existing theories. Intuitive knowledge can also be described in terms of domain theories with structures that are analogous to scientific theories in important respects (Carey, 1985; Gopnik & Meltzoff, 1997; Karmiloff-Smith, 1988; Keil, 1989; Murphy & Medin, 1985), and these intuitive theories are grown, elaborated and revised in large part through processes of causal discovery (Gopnik, Glymour, Sobel, Schulz, Kushnir, & Danks, 2004; Tenenbaum, Griffiths, & Niyogi, in press). We will argue that coincidences play a crucial role in the development of both scientific and intuitive theories, as events that provide support for a low-probability alternative to a currently favored causal theory. This definition can be made precise using the mathematics of statistical inference. We use the formal language of causal graphical models (Pearl, 2000; Spirtes, Glymour, & Scheines, 1993) to characterize relevant aspects of intuitive causal theories, and the tools of Bayesian statistics to propose a measure of evidential support for alternative causal theories that can be identified with the strength of a coincidence. This approach allows us to clarify the relationship between coincidences and theory change, and to make quantitative predictions about the strength of coincidences that can be compared with human judgments.

The plan of the paper is as follows. Before presenting our account, we first critique the common view of coincidences as simply unlikely events. This analysis of coincidences is simple and widespread, but ultimately inadequate because it fails to recognize the importance of alternative theories in determining what constitutes a coincidence. We then present a formal analysis of the computational problem underlying causal induction, and use this analysis to show how coincidences may be viewed as events that provide strong but not necessarily sufficient evidence for an alternative to a current theory. After conducting an experimental test of the qualitative predictions of this account, we use it to make quantitative predictions about the strength of coincidences in some of the complex settings where classic examples of coincidences occur: coincidences in space, as in the examples of John Snow and the bombing of London, and coincidences in dates, as in the famous “birthday problem”.
We conclude by returning to the paradox of coincidences identified above, considering why coincidences often lead people astray and discussing their involvement in theory change.

Coincidences are not just unlikely events

Upon experiencing a coincidence, many people react by thinking something like ‘Wow! What are the chances of that?’ (e.g., Falk, 1981-1982). Subjectively, coincidences are unlikely events: we interpret our surprise at their occurrence as indicating that they have low probability. In fact, it is often assumed that being surprising and having low probability are equivalent: the mathematician Littlewood (1953) suggested that events having a probability of one in a million be considered surprising, and many psychologists make this assumption at least implicitly (e.g., Slovic & Fischhoff, 1977). The notion that coincidences are unlikely events pervades literature addressing the topic, irrespective of its origin. This belief is expressed in books on spirituality (‘Regardless of the details of a particular coincidence, we sense that it is too unlikely to have been the result of luck or mere chance,’ Redfield, 1998, p. 14) and in popular books on the mathematical basis of everyday life (‘It is an event which seems so unlikely that it is worth telling a story about,’ Eastaway & Wyndham, 1998, p. 48). Even the statisticians Diaconis and Mosteller (1989) considered the definition ‘a coincidence is a rare event,’ but rejected it on the grounds that ‘this includes too much to permit careful study’ (p. 853).

The most basic version of the idea that coincidences are unlikely events refers only to the probability of a single event. Thus, some data, d, might be considered a coincidence if the probability of d occurring by chance is small.4 On September 11, 2002, exactly one year after terrorists destroyed the World Trade Center in Manhattan, the New York State Lottery “Pick 3” competition, in which three numbers from 0-9 are chosen at random, produced the results 9-1-1 (Associated Press, September 12, 2002). This seems like a coincidence,5 and has reasonably low probability: the three digits were uniformly distributed between 0 and 9, so the probability of such a combination is $(1/10)^3$, or 1 in 1000. If d is a sequence of ten coinflips that are all heads, which we will denote HHHHHHHHHH, then its probability under a fair coin is $(1/2)^{10}$, or 1 in 1024. If d is an event in which one goes to a party and meets four people, all of whom are born on August 3, and we assume birthdays are uniformly distributed, then the probability of this event is $(1/365)^4$, or 1 in 17,748,900,625. Consistent with the idea that coincidences are unlikely events, these values are all quite small.

The fundamental problem with this account is that while coincidences must in general be unlikely events, there are many unlikely events that are not coincidences. It is easy to find events that have the same probability, yet differ in whether we consider them a coincidence. In particular, all of the examples cited above were analyzed as outcomes of uniform generating processes, and so their low probability would be matched by any outcomes of the same processes with the same number of observations. For instance, a fair coin is no more or less likely to produce the outcome HHTHTTHTHT than the outcome HHHHHHHHHH.
Likewise, observing the lottery numbers 7-2-3 on September 11 would be no more likely than observing 9-1-1, and meeting people with birthdays on May 14, July 8, August 21, and October 4 would be just as unlikely as any other combination, including August 3, August 3, August 3, and August 3. Using several other examples of this kind, Teigen and Keren (2003) provided empirical evidence from behavioral judgments for the weak relationship between the surprisingness of events and their probability. For our purposes, these examples are sufficient to establish that our sense of coincidence is not merely a result of low probability.

We will argue that coincidences are not just unlikely events, but rather events that are less likely under our currently favored theory of how the world works than under an alternative theory. The September 11 lottery results, meeting four people with the same birthday, and flipping ten heads in a row all grab our attention because they suggest the existence of hidden causal structure in contexts where our current understanding would suggest no such structure should exist.

Before we explore this hypothesis in detail, we should rule out a more sophisticated version of the idea that coincidences are unlikely events. The key innovation behind this definition is to move from evaluating the probability of a single event to the probability of an event of a certain “kind”, with coincidences being events of unlikely kinds. Hints of this view appear in experiments on coincidences conducted by Falk (1989), who suggested that people are ‘sensitive to the extension of the judged event’ (p. 489) when evaluating coincidences. Falk (1981-1982) also suggested that when one hears a story about a coincidence, ‘One is probably not encoding the story with all its specific details as told, but rather as a more general event “of that kind” ’ (p. 23). Similar ideas have been proposed by psychologists studying figural goodness and subjective randomness (e.g., Garner, 1970; Kubovy & Gilden, 1991), and such an account was worked out in detail by Schlesinger (1991), who explicitly considered coincidences in birthdays. Under this view, meeting four people all born on August 3 is a bigger coincidence than meeting those born on May 14, July 8, August 21, and October 4 because the former is of the kind all on the same day while the latter is of the kind all on different days. Similarly, the sequence of coinflips HHHHHHHHHH is more of a coincidence than the sequence HHTHTTHTHT because the former is of the kind all outcomes the same while the latter is of the kind equal number of heads and tails; out of all 1024 sequences of length 10, only two are of the former kind, while there are 252 of the latter kind.

The “unlikely kinds” definition runs into several difficulties. First, there are the problems of specifying what might count as a kind of event, and which kind should be used when more than one is applicable. Like the coinflip sequence HHTHTTHTHT, the alternating sequence HTHTHTHTHT falls under the kind equal number of heads and tails, but it appears to present something of a coincidence while the former sequence does not. The “unlikely kinds” theory might explain this by saying that HTHTHTHTHT is also a member of a different kind, alternating heads and tails, containing only two sequences out of the possible 1024. But why should this second kind dominate? Intuitively, the fact that it is more specific seems important, but why?
And why isn’t alternation as much of a coincidence as repetition, even though the kinds all outcomes the same and alternating heads and tails are equally specific? How would we assess the degree of coincidence for the sequence HHHHHHHTTT? It appears more coincidental than a merely “random” sequence like HHTHTTHTHT, but what “kind of event” is relevant? Finally, why do we not consider a kind like all outcomes that begin HHTHTTHTHT. . . , which would predict that the sequence HHTHTTHTHT is in fact the most coincidental of all?

The situation becomes even more complex when we go beyond discrete events. For example, the bombing of London suggested a coincidence based upon bomb locations, which are not easily classified into kinds. For the “unlikely kinds” definition to work, we need to be able to identify the kinds relevant to any context, including those involving continuous stimuli. The difficulty of doing this is a consequence of not recognizing the role of alternative theories in determining what constitutes a coincidence. The fact that certain kinds of events seem natural is a consequence of the theory-ladenness of the observer: there is no a priori reason why any set of kinds should be favored over any other. In the cases where definitions in terms of unlikely kinds do seem to work, it is because the kinds being used implicitly correspond to the predictions of a reasonable set of alternative theories. To return to the coinflipping example, kinds defined in terms of the number of heads in a sequence implicitly correspond to considering a set of alternative theories that differ in their claims about the probability that a coin comes up heads, a fact that we discuss in more detail below. Alternative theories still exist in contexts where no natural “kinds” can be found, providing greater generality for definitions of coincidences based upon alternative theories.

Finally, even if a method for defining kinds seems clear, it is possible to find counterexamples to the idea that coincidences are events of unlikely kinds. For instance, a common way of explaining why a sequence like HHHH is judged less random (and more coincidental) than HHTT is that the former is of the kind four heads while the latter is of the kind two heads, two tails (cf. Garner, 1970; Kubovy & Gilden, 1991). Since one is much more likely to obtain a sequence with two heads and two tails than a sequence with four heads when flipping a fair coin four times, the latter seems like a bigger coincidence. The probability of N_H heads from N trials is

\[
P_{\text{kind}}(d) = \frac{\binom{N}{N_H}}{2^N}, \tag{1}
\]

so the probability of the four heads kind is $\binom{4}{4}/2^4 = 0.0625$, while the probability of the two heads, two tails kind is $\binom{4}{2}/2^4 = 0.375$. However, we can easily construct a sequence of a kind that has lower probability than four heads: the reasonably random HHHHTHTTHHHTHTHHTHTTHHH is but one example of the fifteen heads, eight tails kind, which has probability $\binom{23}{15}/2^{23} = 0.0584$.
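These kind-based probabilities, and the contrast with the probability of any specific sequence, can be checked with a short computation. The following Python snippet is a minimal sketch; the function names are ours, and Equation 1 is implemented directly.

```python
from math import comb

def p_sequence(seq: str) -> float:
    """Probability of one specific heads/tails sequence under a fair coin."""
    return 0.5 ** len(seq)

def p_kind(n: int, n_heads: int) -> float:
    """Equation 1: probability that a fair coin produces a sequence
    of the kind 'n_heads heads out of n flips'."""
    return comb(n, n_heads) / 2 ** n

# Every specific length-10 sequence is equally (im)probable ...
print(p_sequence("HHHHHHHHHH"), p_sequence("HHTHTTHTHT"))   # both 1/1024

# ... but the kinds differ sharply in probability.
print(p_kind(4, 4))    # 'four heads'                 ~ 0.0625
print(p_kind(4, 2))    # 'two heads, two tails'       ~ 0.375
print(p_kind(23, 15))  # 'fifteen heads, eight tails' ~ 0.0584 < 0.0625
```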
Coincidence as statistical inference

In addition to the problems outlined in the previous section, the definition of coincidences as unlikely events seems to neglect one of the key components of coincidences: their apparent meaningfulness. This is the aspect of coincidences that makes them so interesting, and is tied to their role in scientific discoveries. We will argue that the meaningfulness of coincidences is due to the fact that coincidences are not just arbitrary low-probability patterns, but patterns that suggest the existence of unexpected causal structure. One of the earliest statements of this idea appears in Laplace (1795/1951):

    If we seek a cause wherever we perceive symmetry, it is not that we regard a symmetrical event as less possible than the others, but, since this event ought to be the effect of a regular cause or that of chance, the first of these suppositions is more probable than the second. On a table we see letters arranged in this order, C o n s t a n t i n o p l e, and we judge that this arrangement is not the result of chance, not because it is less possible than the others, for if this word were not employed in any language we should not suspect it came from any particular cause, but this word being in use among us, it is incomparably more probable that some person has thus arranged the aforesaid letters than that this arrangement is due to chance. (p. 16)

In this passage, Laplace suggested that our surprise at orderly events results from the inference that these events are more likely under a process with causal structure than under one governed purely by chance: when an event is far more probable under the former supposition, we should suspect that a cause was involved. The idea that coincidences are events that provide us with evidence for the existence of unexpected causal structure has been developed further by a number of authors. In the philosophy of science, Horwich (1982) defined a coincidence as ‘an unlikely accidental correspondence between independent facts, which suggests strongly, but in fact falsely, some causal relationship between them’ (p. 104), and expressed this idea formally using the language of Bayesian inference, as we do below. Similar ideas have been proposed by Bayesian statisticians, including Good (1956, 1984) and Jaynes (2003). In cognitive science, Feldman (2004) has explored an account of why simple patterns are surprising that is based upon the same principle, viewing events that exhibit greater simplicity than should be expected under a “null hypothesis” as coincidences.

In the remainder of the paper, we develop a formal framework which allows us to make this definition of coincidences precise, and to test its quantitative predictions. Our focus is on the role of coincidences in causal induction. Causal induction has been studied extensively in both philosophy (e.g., Hume, 1739/1978) and psychology (e.g., Inhelder & Piaget, 1958). Detailed reviews of some of this history are provided by Shultz (1982; Shultz & Kestenbaum, 1985) and White (1990). Recent research on human causal induction has focused on formal models based upon analyses of how an agent should learn about causal relationships (e.g., Anderson, 1990; Cheng, 1997; Griffiths & Tenenbaum, 2005; López, Cobos, Caño, & Shanks, 1998; Steyvers, Tenenbaum, Wagenmakers, & Blum, 2003). These formal models establish some of the groundwork necessary for our analysis of the functional role of coincidences.

Any account of causal induction requires a means of representing hypotheses about candidate causal structures. We will represent these hypotheses using causal graphical models (also known as causal Bayesian networks or causal Bayes nets).
Causal graphical models are a language for representing and reasoning about causal relationships that has been developed in computer science and statistics (Pearl, 2000; Spirtes, Glymour, & Scheines, 1993). This language has begun to play a role in theories of human causal reasoning (e.g., Danks & McKenzie, under revision; Gopnik et al., 2004; Glymour, 1998, 2001; Griffiths & Tenenbaum, 2005; Lagnado & Sloman, 2002; Rehder, 2003; Steyvers, Tenenbaum, Wagenmakers, & Blum, 2003; Tenenbaum & Griffiths, 2001, 2003; Waldmann & Martignon, 1998), and several theories of human causal induction can be expressed in terms of causal graphical models (Griffiths & Tenenbaum, 2005; Tenenbaum & Griffiths, 2001). A causal graphical model represents the causal relationships among a set of variables using a graph in which variables are nodes and causation is indicated with arrows. This graphical structure has implications for the probability of observing particular values for those variables, and for the consequences of interventions on the system (see Pearl, 2000, or Griffiths & Tenenbaum, 2005, for a more detailed introduction). A variety of algorithms exist for learning the structure of causal graphical models, based upon either reasoning from a pattern of statistical dependencies (e.g., Spirtes et al., 1993) or methods from Bayesian statistics (e.g., Heckerman, 1998). We will pursue the latter approach, treating theories as generators of causal graphical models: recipes for constructing a set of causal graphical models that describes the possible causal relationships among variables in a given situation. Theories thus specify the hypothesis spaces and prior probabilities that are used in Bayesian causal induction. We develop this idea formally elsewhere (Griffiths, 2005; Griffiths et al., 2004; Griffiths & Tenenbaum, 2005; Tenenbaum & Griffiths, 2003; Tenenbaum et al., in press; Tenenbaum & Niyogi, 2003), but will use it relatively informally in this paper.

These tools provide the foundations of our approach to coincidences. In this section, we use a Bayesian approach to causal induction to develop an account of what makes an event a coincidence, and to delineate the difference between “mere” and “suspicious” coincidences. We then provide a more detailed formal analysis of one simple kind of coincidence – coincidences in coinflips – indicating how this account differs from the idea that coincidences are unlikely events. The section ends by identifying the empirical predictions made by this account, which are tested in the remainder of the paper.

What makes a coincidence?

Assume that a learner has data d and a set of hypotheses h, each being a theory about the system that produced that data. Before seeing any data, the learner assigns prior probabilities P(h) to these hypotheses. The posterior probability of any hypothesis h after seeing d can be evaluated using Bayes’ rule,

\[
P(h \mid d) = \frac{P(d \mid h)\,P(h)}{\sum_{h'} P(d \mid h')\,P(h')}, \tag{2}
\]

where P(d | h), known as the likelihood, specifies the probability of the data d being generated by the system represented by hypothesis h. In the case where there are just two hypotheses, h1 and h0, we can express the relative degree of belief in h1 after seeing d using the posterior odds,

\[
\frac{P(h_1 \mid d)}{P(h_0 \mid d)} = \frac{P(d \mid h_1)}{P(d \mid h_0)} \cdot \frac{P(h_1)}{P(h_0)}, \tag{3}
\]

which follows directly from Equation 2.
The posterior odds are determined by two factors: the likelihood ratio, which indicates the support that d provides in favor of h1 over h0, and the prior odds, which express the a priori plausibility of h1 as compared to h0. If we take the logarithm of Equation 3, we obtain

\[
\log \frac{P(h_1 \mid d)}{P(h_0 \mid d)} = \log \frac{P(d \mid h_1)}{P(d \mid h_0)} + \log \frac{P(h_1)}{P(h_0)}, \tag{4}
\]

in which the log likelihood ratio and the log prior odds combine additively to give the log posterior odds.

To make this analysis more concrete, consider the specific example of evaluating whether a new form of genetic engineering influences the sex of rats. The treatment is tested through a series of experiments in which female rats receive a prenatal injection of a chemical, and the sex of their offspring is recorded at birth. In the formal schema above, h1 refers to the theory that injection of the chemical influences sex, and h0 refers to the theory that injection and sex are independent. These two theories generate the causal graphical models Graph 1 and Graph 0 shown in Figure 1. Under Graph 0, the probability that a rat is male should be 0.5, while under Graph 1, rats injected with the chemical have some other probability of being male. Imagine that in the experimental test, the first ten rats were all born male. These data, d, would provide relatively strong support for the existence of a causal relationship; such a relationship also seems a priori plausible, and as a consequence you might be inclined to conclude that the relationship exists.

Insert Figure 1 about here

Now contrast this with a different case of causal induction. A friend insists that she possesses the power of psychokinesis. To test her claim, you flip a coin in front of her while she attempts to influence the outcome. You are evaluating two hypotheses: h1 is the theory that her thoughts can influence the outcome of the coinflip, while h0 is the theory that her thoughts and the coinflip are independent. As in the previous case, these theories generate the causal graphical models Graph 1 and Graph 0 shown in Figure 1. The first ten flips are all heads. The likelihood ratio for these data, d, provides just as much support for a causal relationship as in the genetic engineering example, but the existence of such a relationship has lower prior probability. As a consequence, you might conclude that she does not possess psychic powers, and that the evidence to the contrary provided by the coinflips was just a coincidence.

Coincidences arise when there is a conflict between the evidence an event provides for a theory and our prior beliefs about the plausibility of that theory. More precisely, a coincidence is an event that provides support for an alternative to a current theory, but not enough support to convince us to accept that alternative. This definition can be formalized using the Bayesian machinery introduced above. Assume that h0 denotes the current theory entertained by a learner, and h1 is an alternative postulating the existence of a richer causal structure or novel causal force. In many cases of causal induction, such as establishing whether a chemical influences the sex of rats, we learn about causal relationships that seem relatively plausible, and the likelihood ratio and prior odds in favor of h1 are not dramatically in conflict. A coincidence produces a likelihood ratio in favor of h1 that is insufficient to overwhelm the prior odds against h1, resulting in middling posterior odds.
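As a rough numerical illustration of Equations 3 and 4, the sketch below plugs in ten heads (or ten male rats) and compares the two scenarios. The likelihood under h1 assumes, purely for illustration, a uniform prior on the unknown probability of heads (so P(d | h1) = 1/11), and the prior probabilities assigned to the two scenarios are hypothetical values, not estimates taken from the text.

```python
import numpy as np

def log_posterior_odds(log_likelihood_ratio: float, prior_h1: float) -> float:
    """Equation 4: log posterior odds = log likelihood ratio + log prior odds."""
    log_prior_odds = np.log(prior_h1) - np.log(1 - prior_h1)
    return log_likelihood_ratio + log_prior_odds

# Ten heads in a row (or ten male rats in a row).
p_d_h0 = 0.5 ** 10        # h0: outcomes are independent 50/50 events
p_d_h1 = 1.0 / 11         # illustrative h1: unknown bias w, uniform on [0, 1],
                          # so P(d | h1) = integral of w^10 dw = 1/11
llr = np.log(p_d_h1) - np.log(p_d_h0)   # identical evidence in both scenarios

# Hypothetical prior probabilities for the causal hypothesis h1.
for scenario, prior in [("genetic engineering", 0.5), ("psychokinesis", 1e-6)]:
    print(f"{scenario:20s} log posterior odds = "
          f"{log_posterior_odds(llr, prior):+.1f}")
# Roughly +4.5 for the plausible cause (evidence) and about -9.3 for the
# implausible one (a mere coincidence), despite the same likelihood ratio.
```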
The likelihood ratio provides a measure of the strength of a coincidence, indicating how much support the event provides for h1. Under this definition, the strongest coincidences are obtained in settings where a likelihood ratio strongly favoring h1 meets prior odds that are equally strongly against h1. Thus, like the test of psychokinesis, canonical coincidences typically involve data that produce a high likelihood ratio in favor of an alternative theory in a context where the current theory is strongly entrenched.

Mere and suspicious coincidences

Up to this point, we have been relatively loose in our treatment of the term “coincidence”, relying on the familiar phenomenology of surprise associated with these events. However, when people talk about coincidences, they do so in two quite different contexts. The first is in dismissing an event as “just a coincidence”, something that is surprising but ultimately believed to be the work of chance. We will refer to these events as mere coincidences. The second context in which people talk about coincidences is when an event begins to render an alternative theory plausible. For example, Hacking’s (1983) analysis of the “argument from coincidence” focuses on this sense of coincidence, as does the treatment of coincidences in the study of vision in humans and machines (Barlow, 1985; Binford, 1981; Feldman, 1997; Knill & Richards, 1996; Witkin & Tenenbaum, 1983). We will refer to these events as suspicious coincidences.

This distinction raises an interesting question: what determines whether a coincidence is mere or suspicious? Under the account of coincidences outlined above, events can make a transition from coincidence to evidence as the posterior odds in favor of h1 increase. Since being considered a coincidence requires that the posterior odds remain middling, an event ceases being a coincidence and simply becomes evidence once the posterior odds grow sufficiently large. Consideration of the posterior odds also allows us to accommodate the difference between mere and suspicious coincidences. It is central to our definition of a mere coincidence that it be an event that ultimately results in believing h0 over h1. Consequently, the posterior odds must be low. In a suspicious coincidence, we are left uncertain as to the true state of affairs, and are driven to investigate further. This corresponds to a situation in which the posterior odds do not favor either hypothesis strongly, being around 1 (or 0, for log posterior odds). The relationship between mere coincidences, suspicious coincidences, and unambiguous evidence for h1 is illustrated schematically in Figure 2.

Insert Figure 2 about here

As indicated in Equations 3 and 4, the posterior odds in favor of h1 increase if either the prior odds or the likelihood ratio increases. Such changes can thus result in a transition from coincidence to evidence, as illustrated in Figure 2. An example of the former was provided above: ten male rats in a row seems like evidence in the context of a genetic engineering experiment, but ten heads in a row is a mere coincidence in a test of psychokinesis, where the prior odds are smaller. Tests of psychokinesis can also be used to illustrate how a change in the likelihood ratio can produce a transition from mere coincidence, through suspicious coincidence, to evidence: ten heads in a row is a mere coincidence, but twenty might begin to raise suspicions about your friend’s powers, or the fairness of the coin.
At ninety heads in a row you might, like Guildenstern in Stoppard’s (1967) play, begin entertaining the possibility of divine intervention, having relatively unambiguous evidence that something out of the ordinary is taking place.

Coincidences in coinflips

We have informally discussed several examples involving flipping a coin. Here, we will make these examples precise, using some tools from Bayesian statistics. This analysis helps to clarify how our framework relates to the idea that coincidences are events of unlikely kinds. Imagine that we have two possible theories about the efficacy of psychokinesis. One theory, h0, stipulates that there can be no relationship between thinking about a coin and whether the coin comes up heads. Under this theory, the probability that a coin comes up heads is always 0.5. The other theory, h1, stipulates that some people can influence the outcome of a coin toss by focusing their mind appropriately, and specifies the probability of the coin coming up heads under such influence using a parameter ω. Given one person and one coin, each of these theories generates one causal graphical model: h0 generates Graph 0, while h1 generates Graph 1. Assume that the data, d, consists of N trials in the presence of somebody concentrating on a coin, of which N_H trials produce heads. Since h0 asserts that these outcomes are all the result of chance, P(d | h0) is just $(1/2)^N$, the probability of any particular sequence of N flips of a fair coin.
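To connect this setup back to the transition from mere coincidence, through suspicious coincidence, to evidence, the sketch below computes the likelihood ratio for runs of 10, 20, and 90 heads. The uniform prior on ω and the roughly one-in-a-million prior odds against psychokinesis used here are illustrative assumptions, not values specified in the text.

```python
import numpy as np
from scipy.special import betaln

def log_likelihood_ratio(n: int, n_heads: int) -> float:
    """log P(d | h1) - log P(d | h0) for n flips producing n_heads heads.
    h0: fair coin, so P(d | h0) = (1/2)^n.
    h1: unknown bias w with a uniform prior on [0, 1] (an assumption made
        for illustration), giving P(d | h1) = B(n_heads + 1, n - n_heads + 1)."""
    log_p_h0 = n * np.log(0.5)
    log_p_h1 = betaln(n_heads + 1, n - n_heads + 1)
    return log_p_h1 - log_p_h0

log_prior_odds = np.log(1e-6)   # hypothetical: psychokinesis is a priori very unlikely

for n in (10, 20, 90):          # all-heads runs of increasing length
    llr = log_likelihood_ratio(n, n)
    print(f"{n:3d} heads: log LR = {llr:5.1f}, "
          f"log posterior odds = {llr + log_prior_odds:6.1f}")
# Ten heads leaves the odds firmly against h1 (mere coincidence), twenty moves
# them much closer to even (suspicious), and ninety swamps the prior
# (relatively unambiguous evidence).
```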